188 research outputs found
Automated Speed and Lane Change Decision Making using Deep Reinforcement Learning
This paper introduces a method, based on deep reinforcement learning, for
automatically generating a general-purpose decision-making function. A Deep
Q-Network agent was trained in a simulated environment to handle speed and lane
change decisions for a truck-trailer combination. In a highway driving case, it
is shown that the method produced an agent that matched or surpassed the
performance of a commonly used reference model. To demonstrate the generality
of the method, the exact same algorithm was also trained for an overtaking case
on a road with oncoming traffic. Furthermore, a novel way of applying a
convolutional neural network to high-level input that represents
interchangeable objects is introduced
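The abstract does not give the architecture details, but the general idea of handling interchangeable objects is to apply the same encoding to every surrounding object and then pool over the object dimension, so the result is invariant to object ordering. A minimal numpy sketch of that idea (all sizes and names here are illustrative, not taken from the paper) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each surrounding vehicle is described by a small feature vector
# (e.g. relative position and relative speed); sizes are illustrative.
n_objects, n_features, n_hidden = 4, 3, 8
objects = rng.normal(size=(n_objects, n_features))

# Shared weights applied to every object; a 1x1 convolution over the
# object dimension is equivalent to this matrix product.
W = rng.normal(size=(n_features, n_hidden))

def encode(objs):
    """Encode a set of objects into a fixed-size, order-invariant vector."""
    per_object = np.maximum(objs @ W, 0.0)   # shared ReLU layer per object
    return per_object.max(axis=0)            # max-pool over the objects

# Shuffling the objects leaves the encoding unchanged.
shuffled = objects[rng.permutation(n_objects)]
assert np.allclose(encode(objects), encode(shuffled))
```

Because the pooled vector has a fixed size, it can feed a standard Q-network head regardless of how many surrounding vehicles are present.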
Combining Coordination of Motion Actuators with Driver Steering Interaction
Objective: A new method is suggested for the coordination of vehicle motion actuators, where driver feedback and capabilities become natural elements in the prioritization.

Methods: The method uses a weighted least squares control allocation formulation, where driver characteristics can be added as virtual force constraints. The approach is particularly suitable for heavy commercial vehicles, which in general are over-actuated. The method is applied in a specific use case by simulating a truck that applies automatic braking on a split friction surface. Here the driver steering angle required to maintain the intended direction is limited by a constant threshold, which is automatically accounted for when balancing actuator usage in the method.

Results: Simulation results show that the actual required driver steering angle can be expected to match the set constant well. Furthermore, the stopping distance is, as expected, strongly affected by this set capability of the driver to handle the lateral disturbance.

Conclusion: In general, the capability of the driver to handle disturbances should be estimated in real time, considering the driver's mental state. The method then makes it possible to estimate, for example, the stopping distance implied by this capability. When the driver is estimated to be active, the setup even has the potential to shorten the stopping distance compared with currently available systems. The approach is feasible for real-time applications and requires only measurable vehicle quantities for parameterization. Other suitable applications within the scope of the method include electronic stability control, lateral stability control at launch and optimal cornering arbitration
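The abstract's exact formulation is not reproduced here, but a generic weighted least squares control allocation step can be sketched as follows. The symbols (effectiveness matrix B, virtual force vector v, weights W) are standard in the control-allocation literature, not taken from the paper, and the numbers are illustrative:

```python
import numpy as np

# Effectiveness matrix B maps actuator commands u to virtual forces v = B u.
# An over-actuated vehicle has more actuators than controlled quantities.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.5, 1.0]])      # 2 virtual forces, 3 actuators
v_des = np.array([1.0, 0.4])         # desired virtual forces

W_v = np.diag([10.0, 10.0])          # priority on tracking the virtual forces
W_u = np.diag([1.0, 1.0, 1.0])       # penalty on actuator usage

# Weighted least squares: minimise |W_v (B u - v_des)|^2 + |W_u u|^2,
# solved as one stacked ordinary least-squares problem.
A = np.vstack([W_v @ B, W_u])
b = np.concatenate([W_v @ v_des, np.zeros(3)])
u, *_ = np.linalg.lstsq(A, b, rcond=None)

print(u, B @ u)   # the achieved virtual forces B @ u approach v_des
```

A driver-related limit, such as the steering-angle threshold from the use case, could be expressed as one more heavily weighted row in the stacked problem, which is what lets driver capability enter the actuator prioritization naturally.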
Combining Planning and Deep Reinforcement Learning in Tactical Decision Making for Autonomous Driving
Tactical decision making for autonomous driving is challenging due to the
diversity of environments, the uncertainty in the sensor information, and the
complex interaction with other road users. This paper introduces a general
framework for tactical decision making, which combines the concepts of planning
and learning, in the form of Monte Carlo tree search and deep reinforcement
learning. The method is based on the AlphaGo Zero algorithm, which is extended
to a domain with a continuous state space where self-play cannot be used. The
framework is applied to two different highway driving cases in a simulated
environment and it is shown to perform better than a commonly used baseline
method. The strength of combining planning and learning is also illustrated by
a comparison to using the Monte Carlo tree search or the neural network policy
separately
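Methods in the AlphaGo Zero family select actions inside the search tree with a PUCT rule that combines the value estimates gathered by the tree search with the neural network's policy priors. A minimal sketch of that selection step (not the paper's code; the toy numbers are invented):

```python
import numpy as np

def puct_select(Q, N, priors, c_puct=1.5):
    """Pick the action maximising Q(a) + c * P(a) * sqrt(sum(N)) / (1 + N(a)).

    Q: mean return per action from previous simulations,
    N: visit count per action, priors: policy-network probabilities.
    """
    total = N.sum()
    ucb = Q + c_puct * priors * np.sqrt(total + 1e-8) / (1.0 + N)
    return int(np.argmax(ucb))

# Toy node with 3 actions, e.g. {keep lane, change left, change right}.
Q = np.array([0.2, 0.5, 0.1])        # mean returns so far
N = np.array([10, 2, 0])             # visit counts
priors = np.array([0.6, 0.3, 0.1])   # policy-network prior over actions

a = puct_select(Q, N, priors)        # exploration bonus favours action 1 here
```

The bonus term shrinks as an action's visit count grows, which is how the rule balances exploiting high-value branches against exploring the ones the prior still recommends.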
Driver Response to Automatic Braking under Split Friction Conditions
Under normal pedal braking on split-μ, a driver can actively steer or adjust the brake level to control lateral drift. The same driver response, and thus lateral deviation, cannot be assumed when brakes are automatically triggered by a collision mitigation system, since the driver can be expected to be less attentive. To quantify the lateral deviation in this scenario, a test was run at 50 km/h with 12 unaware drivers in a heavy truck. The brakes were configured to emulate automatic braking on split-μ. Results show that the maximum lateral deviation from the original direction was 0.25 m on average. Two drivers deviated by 0.5 m. This can be compared to the 2.2 m that was reached when the steering was held fixed
Development of simplified air drag models including crosswinds for commercial heavy vehicle combinations
Accurate range prediction requires good knowledge of the prevailing wind conditions and how they affect the energy consumption of the ego vehicle. A few simplified vehicle air drag models that explicitly include the effect of crosswinds are presented and compared through objective criteria. The models are developed from the standard air drag equation, in which the effect of wind is implicit and therefore often forgotten or neglected. The purpose is to find a low-complexity model, complementing CFD models and wind tunnel tests, that can be used for range estimation and predictive energy management algorithms. To simplify online estimation, a requirement is that the air drag models contain only a few tuning parameters. The models are validated against CFD calculations for a few vehicle combinations, and the best models show good accuracy for air attack angles up to at least 60 degrees. It is shown that the parameters of the simplified models can loosely be connected to some basic geometrical attributes of a vehicle combination, so it should be possible to give at least a rough estimate of the parameters of a simplified model from these attributes. This is useful for making a first estimate of the aerodynamic properties of a vehicle combination after major changes to the exterior, e.g. when adding a trailer. It also highlights that the size and shape of the vehicle side may be mainly responsible for the high sensitivity of longitudinal air drag to crosswinds for large vehicle combinations
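The standard air drag equation F = ½ρ·c_d·A·v² hides the wind inside the airspeed v. A small sketch of how the crosswind can instead be made explicit, by resolving the wind into the vehicle frame before applying the equation (the constants and the fixed drag coefficient are illustrative assumptions, not the paper's models, which let the coefficients vary with the attack angle):

```python
import math

def air_drag_longitudinal(v_vehicle, v_wind, wind_angle_deg,
                          rho=1.225, cd=0.6, area=10.0):
    """Longitudinal air drag force [N] on a vehicle moving at v_vehicle [m/s],
    with wind of speed v_wind [m/s] arriving from wind_angle_deg relative to
    the direction of travel (0 deg = pure headwind).

    The wind is resolved into the vehicle frame and the force is
    F = 0.5 * rho * cd * A * |v_air| * v_air_x, with cd held constant --
    simplified models make cd a function of the attack angle instead.
    """
    beta = math.radians(wind_angle_deg)
    v_x = v_vehicle + v_wind * math.cos(beta)   # longitudinal airspeed
    v_y = v_wind * math.sin(beta)               # lateral airspeed
    v_air = math.hypot(v_x, v_y)                # total airspeed magnitude
    return 0.5 * rho * cd * area * v_air * v_x

# A headwind increases drag relative to still air; the attack angle
# atan2(v_y, v_x) grows with crosswind strength.
still = air_drag_longitudinal(25.0, 0.0, 0.0)
head = air_drag_longitudinal(25.0, 5.0, 0.0)
```

Setting v_wind to zero recovers the usual ½ρ·c_d·A·v² form, which is exactly the implicit-wind case the abstract warns is often all that gets used.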
Interaction-Aware Trajectory Prediction and Planning in Dense Highway Traffic using Distributed Model Predictive Control
In this paper we treat optimal trajectory planning for an autonomous vehicle
(AV) operating in dense traffic, where vehicles closely interact with each
other. To tackle this problem, we present a novel framework that couples
trajectory prediction and planning in multi-agent environments, using
distributed model predictive control. A demonstration of our framework is
presented in simulation, employing a trajectory planner using non-linear model
predictive control. We analyze performance and convergence of our framework,
subject to different prediction errors. The results indicate that the obtained
locally optimal solutions are improved, compared with decoupled prediction and
planning
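The coupling idea can be illustrated with a deliberately tiny toy, which is an assumption-laden stand-in for the paper's framework, not its method: two agents on a one-dimensional road alternately re-plan against the other's latest predicted trajectory until the plans stop changing, each solving a small smooth optimisation with a soft separation penalty:

```python
import numpy as np
from scipy.optimize import minimize

T, v_des, d_safe = 5, 1.0, 1.5        # horizon, desired step, safety gap

def plan(x0, other_traj, w_gap=50.0):
    """Plan T positions for one agent, given the other agent's predicted
    trajectory -- the coupling term of the distributed scheme."""
    def cost(x):
        steps = np.diff(np.concatenate(([x0], x)))
        track = np.sum((steps - v_des) ** 2)           # keep desired speed
        gap = d_safe - np.abs(x - other_traj)
        collide = np.sum(np.maximum(gap, 0.0) ** 2)    # soft separation
        return track + w_gap * collide
    x_init = x0 + v_des * np.arange(1, T + 1)
    return minimize(cost, x_init).x

# Two vehicles starting close together; alternating the two planning
# problems drives the coupled system towards a fixed point.
xa, xb = 0.0, 0.5
traj_b = xb + v_des * np.arange(1, T + 1)
for _ in range(10):
    traj_a = plan(xa, traj_b)
    traj_b = plan(xb, traj_a)
```

After a few rounds the trajectories separate towards the safety gap, which is the qualitative behaviour one expects: each local problem is cheap, and the interaction is handled purely through the exchanged predictions.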
Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
Reinforcement learning (RL) can be used to create a tactical decision-making
agent for autonomous driving. However, previous approaches only output
decisions and do not provide information about the agent's confidence in the
recommended actions. This paper investigates how a Bayesian RL technique, based
on an ensemble of neural networks with additional randomized prior functions
(RPF), can be used to estimate the uncertainty of decisions in autonomous
driving. A method for classifying whether or not an action should be considered
safe is also introduced. The performance of the ensemble RPF method is
evaluated by training an agent on a highway driving scenario. It is shown that
the trained agent can estimate the uncertainty of its decisions and indicate an
unacceptable level when the agent faces a situation that is far from the
training distribution. Furthermore, within the training distribution, the
ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study,
the estimated uncertainty is used to choose safe actions in unknown situations.
However, the uncertainty information could also be used to identify situations
that should be added to the training process
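The randomized prior function construction can be sketched compactly: each ensemble member k estimates Q_k(s, a) = f_k(s, a) + β·p_k(s, a), where p_k is a fixed random prior that is never trained, and the spread of the K estimates serves as the uncertainty signal. In this sketch linear maps stand in for the neural networks, and the threshold value is an invented placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_actions, n_features, beta = 10, 3, 4, 3.0

# Trainable part f_k and fixed random prior p_k for each ensemble member;
# linear maps stand in for neural networks here.
trainable = [rng.normal(size=(n_features, n_actions)) * 0.1 for _ in range(K)]
priors = [rng.normal(size=(n_features, n_actions)) for _ in range(K)]

def q_ensemble(state):
    """All K Q-estimates for one state, shape (K, n_actions)."""
    return np.array([state @ f + beta * (state @ p)
                     for f, p in zip(trainable, priors)])

def act(state, var_threshold=5.0):
    """Greedy action plus a safety flag from the ensemble spread."""
    qs = q_ensemble(state)
    mean, var = qs.mean(axis=0), qs.var(axis=0)
    a = int(np.argmax(mean))
    safe = bool(var[a] < var_threshold)   # high spread -> low confidence
    return a, safe

state = rng.normal(size=n_features)
action, is_safe = act(state)
```

Far from the training distribution the trainable parts no longer cancel the diverse priors, so the variance grows, which is the mechanism behind flagging unfamiliar situations as unsafe.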
Real-time performance of control allocation for actuator coordination in heavy vehicles
This paper shows how real-time optimisation for actuator coordination, known as control allocation, can be a viable choice for heavy vehicle motion control systems. For this purpose, a basic stability control system implementing the method is presented. The real-time performance of two different control allocation solvers is evaluated and the use of dynamic weighting is analysed. Results show that sufficient vehicle stability can be achieved when using control allocation for actuator coordination in heavy vehicle stability control. Furthermore, real-time simulations indicate that the optimisation can be performed with the computational capacity of today's standard electronic control units. © 2009 IEEE
Circulating luteinizing hormone receptor inhibitor(s) in boys with chronic renal failure
Patients with chronic renal failure frequently have hypogonadism. To elucidate the molecular mechanisms involved, we tested the ability of serum from these patients to inhibit recombinant human luteinizing hormone receptors. Using a cell line expressing functional human luteinizing hormone receptors, we found that adenosine 3′,5′-monophosphate (cAMP) production was markedly inhibited by sera from the patients, but not by sera from healthy subjects. Inhibition of cAMP production was associated with inhibition of 125I-human chorionic gonadotropin binding. Inhibition of LH receptors by sera from patients correlated with the glomerular filtration rate and decreased after renal allograft transplantation. Fractionation of serum samples localized the receptor-inhibiting activity to proteins with molecular weights from 30,000 to 60,000 Daltons. When characterized and purified, the factor responsible may well be a new LH receptor antagonist of clinical significance